
    Brave New GES World: A Systematic Literature Review of Gestures and Referents in Gesture Elicitation Studies

    How can we determine highly effective and intuitive gesture sets for interactive systems tailored to end users' preferences? A substantial body of knowledge is available on this topic, among which gesture elicitation studies stand out distinctively. In these studies, end users are invited to propose gestures for specific referents, i.e., the functions to control in an interactive system. The vast majority of gesture elicitation studies conclude with a consensus gesture set identified through a process of consensus or agreement analysis. However, the information about specific gesture sets determined for specific applications is scattered across a wide landscape of disconnected scientific publications, which makes it challenging for researchers and practitioners to effectively harness this body of knowledge. To address this challenge, we conducted a systematic literature review and examined a corpus of N=267 studies encompassing a total of 187,265 gestures elicited from 6,659 participants for 4,106 referents. To understand similarities in users' gesture preferences within this extensive dataset, we analyzed a sample of 2,304 gestures extracted from the studies identified in our literature review. Our approach consisted of (i) identifying the context of use represented by end users, devices, platforms, and gesture sensing technology, (ii) categorizing the referents, (iii) classifying the gestures elicited for those referents, and (iv) cataloging the gestures based on their representation and implementation modalities. Drawing from the findings of this review, we propose guidelines for conducting future end-user gesture elicitation studies.
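    The consensus or agreement analysis mentioned above is typically quantified with the agreement rate formula of Vatavu and Wobbrock. As a minimal sketch (the gesture labels and data below are hypothetical), the following Python computes the agreement rate AR(r) for one referent from the gestures proposed by each participant:

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent (Vatavu & Wobbrock, 2015).

    `proposals` holds one gesture label per participant; identical labels
    mean those participants proposed the same gesture.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    groups = Counter(proposals).values()  # sizes of identical-proposal groups
    # AR(r) = sum_i |P_i| * (|P_i| - 1) / (|P| * (|P| - 1))
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# Hypothetical example: 8 participants proposing gestures for "turn lights on".
print(agreement_rate(["tap", "tap", "swipe", "tap", "wave",
                      "swipe", "tap", "tap"]))  # (5*4 + 2*1) / 56 ≈ 0.393
```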

    A software suite supporting the design of gesture elicitation studies

    How can we provide designers and developers with support to identify the most appropriate gestures for gestural user interfaces depending on their context of use? To address this research question, we developed GEStory and GESistant. To feed GEStory and implement GESistant, we conducted two systematic literature reviews (SLR) of Gesture Elicitation Studies (GES): a macroscopic analysis of 216 papers based on their metadata, such as authors, definitions, year of publication, type of publication, participants, referents, parts of the body involved (finger, hand, wrist, arm, head, leg, foot, and whole body), and number of proposed gestures; and a microscopic analysis of 267 papers analyzing and classifying the referents, the final gestures forming the consensus set, their representation, and their characterization. We also propose an assessment of the credibility of these studies as a measure for categorizing their strength of impact. Based on the information analyzed in our SLR, we identify opportunities for new studies focused on gesture elicitation with end users. As a result, we present our new GES tools that contribute to the literature. GEStory acts as an interactive design space for gestural interaction to inform researchers and practitioners about existing preferred gestures in different contexts of use and to enable the identification of gaps and opportunities for new studies. GESistant is a cloud computing platform that supports gesture elicitation studies distributed in time and space, structured according to the GES workflow. (FSA - Sciences de l'ingénieur) -- UCL, 202
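    As an illustration of the kind of record the macroscopic analysis collects per paper, here is a minimal Python sketch; the field names mirror the metadata listed above but are hypothetical, not GEStory's or GESistant's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class GesStudy:
    """One study record in the macroscopic SLR (illustrative fields only)."""
    authors: list[str]
    year: int
    publication_type: str                 # e.g., "journal" or "conference"
    participants: int
    referents: int
    body_parts: set[str] = field(default_factory=set)  # "finger", "hand", ...
    proposed_gestures: int = 0

study = GesStudy(authors=["Doe, J."], year=2020,
                 publication_type="conference", participants=24,
                 referents=19, body_parts={"hand", "finger"},
                 proposed_gestures=456)
print(study.proposed_gestures / study.participants)  # gestures per participant
```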

    A Gesture Elicitation Study of Nose-Based Gestures

    Miniaturized sensors can now be embedded in any small wearable to recognize movements on some parts of the human body. For example, an electrooculography-based sensor in smart glasses can recognize finger movements on the nose. To explore these interaction capabilities, this paper reports a gesture elicitation study conducted as a between-subjects experiment involving one group of 12 females and one group of 12 males, who expressed their preferred nose-based gestures for 19 Internet-of-Things tasks. Based on classification criteria, the 912 elicited gestures are clustered into 53 unique gestures across 23 categories, forming a taxonomy and a consensus set of 38 final gestures that provide researchers and practitioners with a larger base, along with six design guidelines. To test whether the measurement method impacts these results, the agreement scores and rates, computed to determine the gestures most agreed upon by participants, are compared with the Condorcet and de Borda count methods; the results remain consistent, sometimes with a slightly different order. To test whether the results are sensitive to gender, inferential statistics suggest that no significant difference exists between males and females for agreement scores and rates.
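    To make the comparison of ranking methods concrete, here is a minimal sketch (with hypothetical gesture names and rankings) of the de Borda count and a Condorcet-winner check over participants' preference rankings for one referent:

```python
from collections import defaultdict

def borda_count(rankings):
    """de Borda count: a ranking of m gestures awards m-1, m-2, ..., 0 points."""
    scores = defaultdict(int)
    for ranking in rankings:
        m = len(ranking)
        for position, gesture in enumerate(ranking):
            scores[gesture] += m - 1 - position
    return dict(scores)

def condorcet_winner(rankings):
    """Gesture that beats every other in pairwise majority contests, if any."""
    candidates = rankings[0]
    for a in candidates:
        if all(sum(r.index(a) < r.index(b) for r in rankings) > len(rankings) / 2
               for b in candidates if b != a):
            return a
    return None  # no Condorcet winner exists

# Hypothetical rankings of three nose gestures by four participants.
rankings = [["tap-tip", "swipe-side", "pinch"],
            ["tap-tip", "pinch", "swipe-side"],
            ["swipe-side", "tap-tip", "pinch"],
            ["tap-tip", "swipe-side", "pinch"]]
print(borda_count(rankings))       # {'tap-tip': 7, 'swipe-side': 4, 'pinch': 1}
print(condorcet_winner(rankings))  # 'tap-tip'
```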

    CROSSIDE: A Cross-Surface Collaboration by Sketching Design Space

    This paper introduces, motivates, defines, and exemplifies CROSSIDE, a design space for representing the capabilities of software for collaborative sketching in a cross-surface setting, i.e., when stakeholders are interacting with and across multiple interaction surfaces, ranging from low-end devices, such as smartwatches and mobile phones, to high-end devices, such as wall displays. By determining the greatest common denominator in terms of system properties among forty-one references, the design space is structured according to seven dimensions: user configurations, surface configurations, input interaction techniques, work methods, tangibility, and device configurations. This design space is aimed at satisfying three virtues: descriptive (i.e., the ability to systematically describe any particular work in cross-surface interaction by sketching), comparative (i.e., the ability to consistently compare two or more works belonging to this area), and generative (i.e., the ability to generate new ideas by identifying potentially interesting, under-covered areas). A radar diagram graphically depicts the design space for these three virtues.
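    The radar diagram mentioned above can be reproduced with standard plotting tools. A minimal matplotlib sketch follows; the dimension labels come from the abstract, while the ratings of the system being profiled are hypothetical:

```python
import numpy as np
import matplotlib.pyplot as plt

dimensions = ["user config.", "surface config.", "input techniques",
              "work methods", "tangibility", "device config."]
ratings = [4, 3, 5, 2, 1, 4]  # hypothetical 0-5 scores for one system

# One angle per dimension; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(dimensions), endpoint=False).tolist()
values = ratings + ratings[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(dimensions)
ax.set_title("CROSSIDE profile of a hypothetical sketching system")
plt.show()
```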

    CROSSIDE: A Design Space for Characterizing Cross-Surface Collaboration by Sketching

    This paper introduces, motivates, defines, and exemplifies CROSSIDE, a design space for representing the capabilities of software for collaborative sketching in a cross-surface setting, i.e., when stakeholders are interacting with and across multiple interaction surfaces, ranging from low-end devices, such as smartwatches and mobile phones, to high-end devices, such as wall displays. By determining the greatest common denominator in terms of system properties among forty-one references, the design space is structured according to seven dimensions: user configurations, surface configurations, input interaction techniques, work methods, tangibility, and device configurations. This design space is aimed at satisfying three virtues: descriptive (i.e., the ability to systematically describe any particular work in cross-surface interaction by sketching), comparative (i.e., the ability to consistently compare two or more works belonging to this area), and generative (i.e., the ability to generate new ideas by identifying potentially interesting, under-covered areas). A radar diagram graphically depicts the design space for these three virtues to enable a visual representation of one or more instances.

    Exploring user-defined gestures for lingual and palatal interaction

    Individuals with motor disabilities can benefit from an alternative means of interacting with the world: using their tongue. The tongue possesses precise movement capabilities within the mouth, allowing individuals to designate targets on the palate. This form of interaction, known as lingual interaction, enables users to perform basic functions by using their tongues to indicate positions. The purpose of this work is to identify the lingual and palatal gestures proposed by end users. To achieve this goal, our initial step was to examine relevant literature on the subject, including clinical studies on the motor capacity of the tongue, devices detecting the movement of the tongue, and current lingual interfaces (e.g., for controlling a wheelchair). Then, we conducted a Gesture Elicitation Study (GES) involving twenty-four (N=24) participants, who proposed lingual and palatal gestures to perform nineteen (19) Internet of Things (IoT) referents, yielding a corpus of 456 gestures. These gestures were clustered into similarity classes (80 unique gestures) and analyzed by dimension, nature, complexity, thinking time, and goodness of fit. Using the Agreement Rate methodology, we present a set of sixteen (16) gestures for a lingual and palatal interface, which serve as a basis for further comparison with gestures suggested by people with disabilities.
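    The clustering into similarity classes described above can be sketched as a simple grouping step: identical (or manually normalized) gesture descriptors per referent form one class, and the largest class per referent is the consensus candidate. All descriptors below are hypothetical:

```python
from collections import Counter, defaultdict

# (participant, referent, normalized gesture descriptor) - hypothetical data.
proposals = [
    (1, "turn on TV", "tap front palate"),
    (2, "turn on TV", "tap front palate"),
    (3, "turn on TV", "swipe left-to-right"),
    (1, "volume up",  "slide front-to-back"),
    (2, "volume up",  "tap front palate"),
    (3, "volume up",  "slide front-to-back"),
]

classes = defaultdict(Counter)  # referent -> similarity classes with sizes
for _, referent, descriptor in proposals:
    classes[referent][descriptor] += 1

# The largest similarity class per referent is the consensus candidate.
for referent, counter in classes.items():
    descriptor, count = counter.most_common(1)[0]
    print(f"{referent}: {descriptor} ({count}/{sum(counter.values())})")
```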

    Theoretically-Defined vs. User-Defined Squeeze Gestures

    This paper presents theoretical and empirical results about user-defined gesture preferences for squeezable objects, focusing on a particular object: a deformable cushion. We start with a theoretical analysis of potential gestures for this squeezable object by defining a multi-dimensional taxonomy of squeeze gestures composed of 82 gesture classes. We then empirically analyze the results of a gesture elicitation study yielding a set of N=32 participants × 21 referents = 672 elicited gestures, further classified into 26 gesture classes. We also contribute to the practice of gesture elicitation studies by explaining why we started from a theoretical analysis (systematically exploring a design space of potential squeeze gestures) and ended with an empirical analysis (conducting a gesture elicitation study afterward): the intersection of the results from these two sources confirms or disconfirms consensus gestures. Based on these findings, we extract from the taxonomy a subset of recommended gestures that give rise to design implications for gesture interaction with squeezable objects.
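    The theoretical analysis described above amounts to systematically enumerating the Cartesian product of the taxonomy's dimensions. A small sketch with hypothetical dimension values (the paper's actual taxonomy yields 82 classes) could look like this:

```python
from itertools import product

# Hypothetical dimensions of a squeeze-gesture design space; the values are
# illustrative only and do not reproduce the paper's 82-class taxonomy.
dimensions = {
    "hands":     ["one-handed", "two-handed"],
    "location":  ["corner", "edge", "center"],
    "intensity": ["light", "strong"],
    "dynamics":  ["single", "repeated", "held"],
}

classes = list(product(*dimensions.values()))  # systematic exploration
print(len(classes))  # 2 * 3 * 2 * 3 = 36 theoretical gesture classes
print(classes[0])    # ('one-handed', 'corner', 'light', 'single')
```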

    Informing Future Gesture Elicitation Studies for Interactive Applications that Use Radar Sensing

    We show how two recently introduced visual tools, RepliGES and GEStory, can be used conjointly to inform possible replications of Gesture Elicitation Studies (GES), with a case study centered on gestures that can be sensed with radars. Starting from a GES identified in GEStory, we employ the dimensions of the RepliGES space to enumerate eight possible ways to replicate that study towards gaining new insights into end users' preferences for gesture-based interaction in applications that use radar sensors.

    Eliciting Contact-based and Contactless Gestures with Radar-based Sensors

    Radar sensing technologies now offer new opportunities for gesturally interacting with a smart environment by capturing microgestures via a chip embedded in a wearable device, such as a smartwatch or a ring worn on a finger. Such microgestures are issued at a very small distance from the device, regardless of whether they are contact-based, such as on the skin, or contactless. As this category of microgestures remains largely unexplored, this paper reports the results of a gesture elicitation study conducted with twenty-five participants who expressed their preferred user-defined gestures for interacting with a radar-based sensor on nineteen referents representing frequent Internet-of-Things tasks. The study clustered the 25 × 19 = 475 initially elicited gestures into four categories of microgestures, namely micro, motion, combined, and hybrid, and thirty-one classes of distinct gesture types, and produced a consensus set of the nineteen most preferred microgestures. In a confirmatory study, twenty new participants selected gestures from this classification for thirty referents representing tasks of various orders; they reached a high rate of agreement and did not identify any new gestures. This classification of radar-based gestures provides researchers and practitioners with a larger basis for exploring gestural interactions with radar-based sensors, such as for hand gesture recognition.

    ePHoRt: Towards a reference architecture for tele-rehabilitation systems

    In recent years, software applications for medical assistance, including telerehabilitation, have gained a strong and continuous presence in the medical field. ePHoRt is a web-based platform for the remote home monitoring of rehabilitation exercises in patients after hip replacement surgery. It involves a learning phase and a serious-game scheme for the execution and evaluation of the exercises as part of a therapeutic program. A modular software architecture is proposed, from the patient's perspective, to be used as a reference model for researchers or professionals who wish to build telerehabilitation platforms, and to guarantee security, flexibility, and scalability. The architecture incorporates two main components. The first one manages the patient's therapeutic programs while respecting two principles: (1) maintain loose coupling between the layers of the framework and (2) follow the Don't Repeat Yourself (DRY) principle. The second one evaluates the performed exercises in real time using an acquisition mechanism for the patient's movements that is independent of the sensing device, together with two artificial intelligence algorithms. The first algorithm evaluates the quality of the movements, while the second assesses the level of pain intensity by recognizing the patient's emotions while performing the movements. Details of the components and the meta-model of the architecture are presented and discussed, considering their advantages and disadvantages.
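    A minimal Python sketch of the two-component architecture described above is shown below; the class and method names are illustrative, not ePHoRt's actual API:

```python
from abc import ABC, abstractmethod

class MovementAcquisition(ABC):
    """Device-independent acquisition of patient movements (loose coupling)."""
    @abstractmethod
    def read_frame(self) -> dict:
        """Return one frame of movement data, whatever the sensing device."""

class TherapeuticProgramManager:
    """Component 1: manages the patient's therapeutic programs."""
    def __init__(self, exercises: list[str]):
        self.exercises = exercises  # e.g., ["hip flexion", "leg raise"]

class ExerciseEvaluator:
    """Component 2: real-time evaluation with two algorithms (stubs here)."""
    def movement_quality(self, frame: dict) -> float:
        return 1.0  # Algorithm 1: score the quality of the movement (stub)

    def pain_intensity(self, frame: dict) -> float:
        return 0.0  # Algorithm 2: estimate pain from recognized emotions (stub)
```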